[00:17:54] <kfox1111> I have one process inserting periodically into a collection...
[00:17:55] <bjori> mongodb will implicitly create collections on-demand
[00:18:06] <bjori> and you want it to fail if the collection doesn't exist?
[00:18:41] <bjori> you would need to do some weird stuff with roles in 2.4
[00:18:47] <kfox1111> I have another process that drops the collection. When it's dropped, I want the first process's insert to fail, notice something happened, and choose the new collection.
[00:18:50] <bjori> but I don't think we have "can create collection" role yet
[00:19:34] <kfox1111> Long story short, I'm using capped collections, and need an easy mechanism to allow me to drop and recreate them with different sizes/etc and have the watchers pick up on it.
[00:21:21] <kfox1111> so no way to tell an insert to fail if collection does not exist?
[00:22:20] <bjori> no, not yet, the readWrite role allows you to create a collection
[00:23:23] <kfox1111> hmm.. so what I am thinking currently is, I have an "init" command that creates my capped collections. I have a process that listens on http and inserts into that collection, and I have a set of watchers that tail the collection.
[00:23:41] <kfox1111> When re-inited, it bumps a version, creates a new collection, and drops the old one.
[00:24:44] <kfox1111> the thing doing the inserting should notice the collection is gone and start writing to the new one. But it ends up recreating the old collection, and not even as a capped collection, causing things to misbehave.
[00:25:17] <kfox1111> a readWrite role wouldn't help in that case either, since it is the one doing the writing.
[00:25:50] <kfox1111> Because I'm making a new one, and I don't need the old one anymore.
[00:26:08] <bjori> well, since you don't care about the data anyway why do you need to resize it?
[00:26:30] <bjori> you could use ttl indexes to expire some data
[00:27:05] <bjori> you could create the new collection and then rename it to the old name
[00:27:17] <bjori> but there is probably a race condition there :)
[00:27:30] <kfox1111> that last bit was what I was trying to avoid.
[00:28:01] <kfox1111> There are some unique IDs involved that need to be reset when rolling to the next collection.
[00:28:25] <kfox1111> Easily noticeable when the collection name has the major version number in it.
[00:29:04] <kfox1111> The problem I'm trying to deal with is that if I don't drop the collection, the tailers will tail forever on a collection that's never going to change.
[00:29:49] <kfox1111> Once dropped, then when they try to recover, they notice the new db version and start tailing the correct collection.
[00:31:04] <kfox1111> The good news is, if I ensure the thing doing the insert is not running while I run the init program, it does work as expected, so I can work around it for now.
[00:31:49] <kfox1111> There's still a missing feature there, though: insert/update needs to support a mode where collection creation is not automatic.
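A minimal shell sketch of the versioned capped-collection scheme described above; the database name, collection names, size, and the meta collection are illustrative assumptions, not kfox1111's actual setup:

    // init: bump the version, create the new capped collection, then drop the old one
    var appdb = db.getSiblingDB("mydb");
    var meta = appdb.getCollection("meta");
    var old = meta.findOne({ _id: "events" }) || { version: 0 };
    var next = old.version + 1;
    appdb.createCollection("events_v" + next, { capped: true, size: 1024 * 1024 });
    meta.save({ _id: "events", version: next });
    if (old.version > 0) {
        appdb.getCollection("events_v" + old.version).drop();
    }

    // writer / tailer: re-read the version before each batch so a re-init is picked up
    var current = meta.findOne({ _id: "events" }).version;
    appdb.getCollection("events_v" + current).insert({ ts: new Date(), payload: "..." });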
[08:43:41] <someprimetime> should I create an array of children comments inside one parent comment in my comments collection?
[08:44:20] <someprimetime> or create a new entry for each record and just mark them as having `parent_id` !== null
[08:44:38] <someprimetime> if not null then it'd be referencing another record which would be its parent
[08:58:47] <crudson> someprimetime: if comments can be indefinitely nested then keep them as top-level entities and link to parent. querying will be a headache otherwise.
[08:59:04] <someprimetime> crudson: ah forgot to mention it'll just be 1 level deep
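A minimal sketch of the parent_id approach crudson describes, which still works fine when the nesting is only one level deep (collection and field names are illustrative):

    // top-level comment
    db.comments.insert({ _id: 1, post_id: 42, parent_id: null, text: "first" })
    // reply, one level deep, referencing its parent
    db.comments.insert({ _id: 2, post_id: 42, parent_id: 1, text: "welcome" })

    // all replies to comment 1
    db.comments.find({ parent_id: 1 })
    // only the top-level comments for a post
    db.comments.find({ post_id: 42, parent_id: null })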
[09:40:13] <Nodex> hilarious ... watch what happens to your ssh window
[09:42:10] <anton123> I have a users collection with a telephone key that contains both numeric and string values. When I try to find a number that is stored as numeric it does not show up, but the string values are coming up in the search.
[09:42:51] <anton123> I used mongoimport to import from mysql
[09:43:25] <Zelest> any telnet that doesn't swallow my ^C should die in a fire.. no matter how cool they are :(
[09:51:02] <anton123> I am using mongodb with php and this is the array I am passing: $column='telephone';$collection->find(array($column => '9899980668'));
[09:51:47] <anton123> if I pass something like +91123456, which is a string, it shows the result
[09:52:13] <anton123> but not 91123456 if it's a numeric entry
[09:52:32] <crudson> reminds me, it's almost Towel Day
[09:56:54] <Nodex> anton123 : you're passing a string to the query
[09:57:54] <Nodex> which means your values are probably saved/stored as strings
[09:58:43] <anton123> values are stored in both string and numeric format in the telephone key from when I did mongoimport from mysql. Those which had - or + were converted to strings and the others to numbers
[09:59:11] <anton123> how can i pass the search term as both numeric and string
[10:02:10] <anton123> actually there is an input box where I will pass something like 9898 and it should show both the numeric and string entries. I used regular expression /searchterm$/
[10:02:40] <anton123> it shows only non-numeric values
[10:03:11] <Nodex> you really should clean your data tbh
[10:05:34] <anton123> how can i change the data type of that key to string for all the data?
[10:07:21] <anton123> in the telephone field, if I search for just 9 it shows me all the string entries that have a 9 in them, whereas the phone numbers which are numeric and contain a 9 don't show up
[10:07:35] <anton123> how can I show both numeric and non-numeric values in my search
[10:10:00] <anton123> let me try. Thank you for your help
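One way to match both representations in the meantime, plus a one-off shell loop that converts the numeric values to strings so the regex search covers everything; the collection name and the assumption that the imported numbers behave as plain JS numbers come from the description above:

    // match either the numeric value or a string ending in those digits
    db.users.find({ $or: [ { telephone: 9899980668 }, { telephone: /9899980668$/ } ] })

    // one-off cleanup: store every telephone value as a string
    db.users.find().forEach(function (doc) {
        if (typeof doc.telephone === "number") {
            db.users.update({ _id: doc._id }, { $set: { telephone: doc.telephone.toString() } });
        }
    });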
[10:33:27] <Hounddog> hi there, what would be the best way to store json in a specified field? as an array/document or convert the json to a string?
[10:34:01] <Nodex> depends how you need to access it
[10:35:05] <Hounddog> well I am receiving tweets and items from instagram, so I have 3 fields: type, item_id and description. In description I just want to save the complete json received from twitter or instagram
[10:36:06] <Hounddog> am quite new to mongo so bear with me :)
[10:37:02] <Nodex> if you don't need to query the json then I would save it as a string
[10:37:46] <Hounddog> well not really, just have to send the json later back to the client but that can be a conversion also
[10:37:55] <Hounddog> that's why I save the id in item_id
[10:38:07] <Hounddog> the id of the tweet or the instagram item, I mean
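A sketch of the string approach Nodex suggests, for the three-field document shape described above (the payload variable and its values are illustrative):

    // whatever came back from the twitter/instagram API, as a parsed object
    var rawTweet = { id_str: "338712345678901248", text: "hello" };

    db.items.insert({
        type: "tweet",
        item_id: rawTweet.id_str,
        description: JSON.stringify(rawTweet)  // kept as an opaque string, parsed again when sent back to the client
    })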
[11:18:09] <remonvv> Does anyone know how mongos manages its connections? Meaning, if I make X connections to mongos will it make X connections to each shard resulting in SHARD_COUNT * X total connections on the mongos instance?
[11:18:35] <remonvv> Or will it start reusing connections at some point. I'm having trouble finding info on this.
[11:19:10] <spolu> hi, when running a find with readPreference: secondary, I see duplicate records coming from chunks that were recently migrated. When running the find on the primaries (no readPreference) the document appears only once. Does anybody have a clue as to why?
[11:22:09] <remonvv> spolu, perhaps the scenario where the secondary of the old shard still has a copy because it hasn't reached the migrate in the oplog yet and the secondary of the new shard already processed the migration.
[11:22:42] <spolu> remonvv: after how long should I start to worry?
[11:23:48] <remonvv> No idea, that's speculation at best. I'd normally assume this wouldn't be possible (since it allows returning 2 documents with the same _id, for example, which I assume is the case here).
[11:40:55] <quattro> I have a sharded setup with replica sets where 1 collection/db is sharded (3x2 replicas)
[11:41:30] <quattro> i need more read capacity on one of the replica sets since there are some other collections on it that need more reads, can I just make it 5 replicas on one and 3 on the other?
[12:09:19] <Snebjorn> will $match take advantage of indexes after an $unwind? For example db.text.aggregate({$unwind: ...}, {$match: ...})
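For reference, only a $match at the very start of the pipeline can use an index; a $match placed after $unwind runs against the already-unwound documents in memory. A common pattern is to match twice (collection and field names here are illustrative):

    db.text.aggregate(
        { $match: { tags: "mongodb" } },   // first stage: can use an index on tags
        { $unwind: "$tags" },
        { $match: { tags: "mongodb" } }    // filters the unwound docs, no index involved
    )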
[12:24:47] <binnyg> I am unable to do a query with the java driver. findOne() works fine but find() with a query object is throwing an error (java.lang.IllegalArgumentException: can't serialize class com.google.common.base.Present).
[12:25:32] <binnyg> any clue as to what's happening?
[12:28:33] <remonvv> The error tells you what's happening.
[12:28:52] <remonvv> You're putting a Present object in your DBObject and it can't serialize that.
[12:29:08] <binnyg> but why is it working with findOne()?
[12:31:19] <binnyg> Maybe I wasn't clear. I have a collection and I am trying to search it. I started with findOne() and can see the result. When I try to use find() on the collection with a query I get the above error.
[13:09:37] <fatih> I have field named "foo" with lots of subfields. I can test a subfield via {"foo.subfield" : {"$exists" : true}}, however {"foo.sub.field" : {"$exists" : true}} doesn't work
[13:09:49] <fatih> I mean the subfield can contain "dots"
[13:10:05] <fatih> therefore I tried {"foo.sub\uFF0Efield" : {"$exists" : true}}
[13:10:20] <fatih> however the find() query doesn't return anything for this
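Dots in a query key are always interpreted as path separators, so a field whose name literally contains a dot can't be addressed that way; the \uFF0E form above only matches if the key was actually stored with that replacement character. A sketch of that convention, assuming the escaping is done by the application at write time (MongoDB does not do it for you):

    // store the key with U+FF0E (fullwidth full stop) instead of "."
    db.things.insert({ foo: { "sub\uFF0Efield": 1 } })

    // the escaped form is then queryable, since it contains no real dot
    db.things.find({ "foo.sub\uFF0Efield": { $exists: true } })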
[16:16:40] <LzrdKing> is there a way to retrieve detailed statistics about a database from the mongod http interface? like what http://127.0.0.1:28019/listDatabases?text=1 provides, only more information than just sizeOnDisk, like number of collections, indexes, objects, etc
[17:07:48] <yann_ck> Hello Guys, Is it possible to aggregate items, group them to get a min and output the corresponding document id?
[17:11:37] <LzrdKing> is there a way to retrieve detailed statistics about a database from the mongod http interface? like what http://127.0.0.1:28019/listDatabases?text=1 provides, only more information than just sizeOnDisk, like number of collections, indexes, objects, etc
[17:11:45] <yann_ck> I can do something like $project: { min: {$min: "$created_at"}}
[17:20:14] <yann_ck> I have something like Item.collection.aggregate({"$project" => { _id: 1, user_id: 1, created_at: 1}}, {"$group" => {:_id => "$user_id", "min" => {"$min" => "$created_at"}}})
[17:20:29] <yann_ck> but can't find how to output more fields in the $group
[17:22:02] <yann_ck> And even tried to put item_id: "$_id" in the group
[17:22:07] <yann_ck> and that output exception: the group aggregate field 'item_id' must be defined as an expression inside an object
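One way to carry the document id along with the minimum: sort first, then take $first inside the $group, so both values come from the same (earliest) document. Shell syntax, with the collection name assumed from the snippet above:

    db.items.aggregate(
        { $sort: { created_at: 1 } },
        { $group: {
            _id: "$user_id",
            min: { $first: "$created_at" },   // earliest created_at per user
            item_id: { $first: "$_id" }       // _id of that earliest document
        } }
    )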
[18:21:33] <jaCen915> kali: looking at this legacy code, using $where and $in seems redundant yes?
[18:21:34] <kali> jaCen915: it also used to raise huge concurrency problems (only one js interpreter per process) but that bit is supposed to be solved since 2.4
[18:22:20] <kali> jaCen915: $in will be quite fast, and the index will be able to optimize it and restrict the number of docs to pipe through the $where
[18:23:31] <jaCen915> kali: are you saying $where is okay to use if you restrict the doc results with $in, or that $in by itself is good enough?
[18:24:26] <kali> jaCen915: it depends what you do with the $where... $where allows you to express virtually any condition whereas the native operators have a finite feature set
[18:39:37] <jaCen915> ain't no choice when it's someone else's code you have to manage ;)
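A sketch of the combination being discussed: the index-friendly $in narrows the candidate set and the $where JavaScript only runs against the documents that survive it (collection and field names are illustrative):

    db.orders.find({
        status: { $in: ["open", "pending"] },           // can use an index on status
        $where: "this.items.length > this.maxItems"     // arbitrary JS, evaluated only on docs that passed the $in
    })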
[18:56:37] <orospakr> hey, I'm having a bit of trouble using the fancier query selectors when nesting: https://gist.github.com/orospakr/5638498 (I assume I've probably made a goofy mistake)
[18:56:50] <orospakr> basically, a query that presumably should match, doesn't.
[19:01:03] <kali> orospakr: wtf is "value1" there for ?
[20:38:58] <LzrdKing> wtf? somehow i created empty databases called collStats, connPoolStats, and listCommands just by curling some urls
[20:52:40] <ThePrimeMedian> can someone help me with a query? I am using mongoose.js (for node.js) here's my schema: http://pastebin.com/VKn2Q7Z3 but what I need is to find all Quest documents that match challenges.metric='user.sigin' and challenges.required >= 5
[20:52:58] <orospakr> is it possible to use $elemMatch on an array of strings rather than an array of objects? I'd like to use the $ operator and while using the flat { myarray : "value_to_match_in_array" } matches, the positional operator is not aware of it.
[20:53:49] <orospakr> (my goal is to replace a string in an array with a different string in a single update operation, if I can)
[21:05:57] <orospakr> (if given a string to match, $elemMatch complains that the parameter is invalid)
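For what it's worth, $elemMatch expects a document to match against, but a plain equality match on the array element should still set the positional operator, so a single update can swap one string for another (a sketch, assuming a simple array-of-strings field):

    db.docs.update(
        { myarray: "old_value" },               // plain equality match against an array element
        { $set: { "myarray.$": "new_value" } }  // $ refers to the position of the matched element
    )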
[21:14:21] <Derick> sorry, missed the question ThePrimeMedian
[21:16:48] <ThePrimeMedian> can someone help me with a query? I am using mongoose.js (for node.js) here's my schema: http://pastebin.com/VKn2Q7Z3 but what I need is to find all Quest documents that match challenges.metric='user.sigin' and challenges.required >= 5
[21:20:04] <Derick> ah, I don't know much about mongoose :-/
[21:20:23] <ThePrimeMedian> ok, in MongoDB how would you do it?
[21:23:35] <jaCen915> ThePrimeMedian: what's the name of the db you're using for this
[21:24:07] <jaCen915> and can you connect to mongo directly? or only via mongoose?
[21:25:06] <ThePrimeMedian> jaCen915: I can connect to mongodb; mongoose is just an ODM on top of MongoDB. So if we can figure out how to make this query, I can adjust it for Mongoose
[21:25:11] <ThePrimeMedian> my DB is named whitedrake
[21:26:23] <jaCen915> ThePrimeMedian: cool…I'm using mongoose right now, let me play around real quick, for now look up the mongo "find" and "gte"
[21:27:22] <ThePrimeMedian> jaCen915: cool. I know how to use MongoDB and MongooseJS - but I have never tried to query an object by its name inside an array inside a document, and get the document... know what I mean?
[21:27:52] <jaCen915> ThePrimeMedian: exactly :) i haven't either, I want to see if my ideas will work
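In the plain shell, a query that requires both conditions on the same element of the challenges array would use $elemMatch (without it, the two conditions could be satisfied by different array elements); the collection name is assumed, and the same selector can be passed to Quest.find() in Mongoose:

    db.quests.find({
        challenges: {
            $elemMatch: { metric: "user.sigin", required: { $gte: 5 } }  // both conditions on the same array element
        }
    })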
[23:33:37] <iNick> Hi guys. We have a realization of someone else's stupid idea. We have 3x servers in a replica set and the hosts entered for the replica set are static private non-routable IPs. Where is the replica set info stored, since we only back up one database daily?
[23:39:58] <redsand> iNick: it's shared amongst them
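More specifically, each member keeps the replica set configuration in its own local database (local.system.replset), which a per-database backup of just your application database will not include. It can be inspected from the shell with:

    rs.conf()                                           // the current replica set configuration
    db.getSiblingDB("local").system.replset.findOne()   // the same document, read directly from the local database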