#mongodb logs for Tuesday the 25th of March, 2014

[00:27:34] <chdog44> I have a document and one of the field values is a document. This document also has documents in its fields and the field names are SimpleDateFormat strings. How can I access the fields in the deepest sub document during a MapReduce? My current code is:
[00:27:36] <chdog44> if (this.night[this.date] && this.night[this.date] !== null && this.night[this.date] !== "undefined")
[00:27:36] <chdog44> val = this.night[this.date].values;
[00:27:36] <chdog44> else
[00:27:36] <chdog44> val = 0;
[00:29:14] <chdog44> I'm pretty much a beginner at MongoDB and I'm accessing it through Jasper, which I'm equally new to, so I'm not sure why that's giving me an exception: TypeError: Cannot read property '2014-02-20' of null near 'this.night[this.date] && this.night'
[00:29:28] <chdog44> 2014-02-20 is the default value I'm passing through for the date
[00:29:45] <chdog44> any input is greatly appreciated :) :)
[00:33:39] <NaN> chdog44: is that Jade code?
[00:33:46] <NaN> I mean, Jasper code
[00:35:16] <chdog44> it's JS within the Jasper Mongo query language
[00:35:23] <chdog44> it's part of my Map function
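A likely culprit, given that error: some documents have night as null, so this.night[this.date] throws before the guard can help. A minimal sketch of a safer map-side check, assuming the field names above:

    // guard against `night` itself being null/missing before indexing into it
    var entry = this.night && this.night[this.date];
    var val = (entry && entry.values) ? entry.values : 0;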
[03:50:12] <BadHorsie> Hey, I have a collection like [{sName:"something",logs:[{startepoch:1,name:"thing1"},{startepoch:2,name:"thing2"},{startepoch:3,name:"thing3"}]}], I'm wondering if there is a way to use $splice so that I retain the 2 latest records from collection.logs...
[03:59:10] <joannac> I think you mean $slice
[03:59:22] <joannac> Doesn't $slice: -2 do what you want?
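A minimal sketch of joannac's suggestion (assuming MongoDB 2.4+, where $slice is used inside $push together with $each):

    db.servers.update(
        { sName: "something" },
        { $push: { logs: {
            $each:  [ { startepoch: 4, name: "thing4" } ], // entry being appended
            $slice: -2                                     // keep only the 2 newest
        } } }
    )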
[06:32:23] <Soothsayer> Anyone here with first-hand experience of using Cube for Time Series data in MongoDB? http://square.github.io/cube/
[06:43:42] <kPb_in> Hey Guys.. The command db.printSlaveReplicationInfo() gives me negative values like "-10 seconds ago" on slaves and positive values on masters.. What does this mean?
[07:02:30] <mylord> i heard querying by mongodb indexes is slow. is that generally true? and if so, how should i insert, say, an auto-incrementing id?
[07:46:58] <mylord> how to prevent adding item to array with same value for eg name. eg, if name: “bot1” exists, don’t update?
[07:51:06] <Soothsayer> mylord: look at the array operators
[07:51:09] <Soothsayer> add to set or something
[07:51:24] <mylord> addToSet, yup
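A sketch of both variants (hypothetical bots array; note that $addToSet only skips exact duplicates of the whole element):

    // skipped only if an identical element already exists
    db.users.update({ _id: userId }, { $addToSet: { bots: { name: "bot1" } } })

    // to skip on a matching name alone, guard in the query instead
    db.users.update(
        { _id: userId, "bots.name": { $ne: "bot1" } },
        { $push: { bots: { name: "bot1", level: 1 } } }
    )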
[08:19:33] <bin> guys i know that probably it's java specific, but if i have a returned embedded object how do i deserialize it to a normal object
[08:44:55] <BogdanR> Hello
[08:46:06] <BogdanR> I have some databases in a replica set and a completely separate database on a different server. I would like to move this separate database into the replica set so I can migrate it without downtime. How can I do this?
[09:40:13] <dandre> hello,
[09:40:39] <Soothsayer> hello
[09:41:44] <dandre> I am trying to use the mongo shell in a script, issuing a javascript command file via heredoc. I don't want to put my script in a temporary file, so how can I do this?
[09:42:25] <dandre> I have tried --eval with no success, and the mongo command expects a file name ending with .js
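One possibility (a sketch, assuming the mongo shell will read a script from stdin):

    mongo mydb <<'EOF'
    printjson(db.stats());
    EOF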
[10:15:35] <kAworu> can I get the number of items pulled using the $pull operator ?
[10:16:16] <kAworu> I $pull several items from many documents in a single collection. update gives the number of documents updated, but what I really want is the number of pulled items.
[10:18:35] <Nodex> not possible
[10:19:03] <kAworu> Nodex: alright thank you.
[10:24:29] <dandre> How can I check whether the admin database contains users?
[10:25:29] <dandre> if there is at least one user, and I issue db.system.users.find().count() without being authenticated I get an error message
[10:26:10] <dandre> but I don't know how to get only the error condition without printing it
[10:27:31] <dandre> I am running my script with:
[10:27:32] <dandre> mongo myscript.js
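A sketch of turning the error into a plain condition inside myscript.js (hypothetical; relies on the unauthenticated count() throwing):

    var hasUsers;
    try {
        hasUsers = db.getSiblingDB("admin").system.users.find().count() > 0;
    } catch (e) {
        // not authorized to read system.users, so auth is enabled
        hasUsers = true;
    }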
[11:17:32] <jakob_> Hi all.. quick question.. Where can I find information about how to debug conflicting mods errors? I'm doing the simplest update queries but still getting them
[11:24:45] <Nodex> can you pastebin your update and the error?
[11:25:04] <Nodex> else wait 58 seconds for no reply and leave the help chan - that works too
[11:55:08] <mylord> is mongo a relatively poor or good choice for a site with lots of concurrent writes (eg 1-10k/s inserts)? ie, is this scenario mongodb’s relative strength or weakness? why?
[11:55:40] <Nodex> depends on your requirements apart from just writing data
[12:00:15] <mylord> say i’ll have 10k concurrent game users, and i might want to insert what button they clicked every 1 second. that db/collection is mostly used for later security logging, so it won’t be read often.
[12:00:44] <mylord> but another db will save level scores every ~1 minute, and will be both written and read equally, for the ~10k concurrent
[12:01:03] <Nodex> I wouldn't use mongodb for the inserts but I would for the aggregated data
[12:01:07] <Nodex> (personally)
[12:03:41] <mylord> I’ve recently started on node.js for the api to recv such updates from my game (as opposed to using php which i’m more familiar with). afaik, node.js works more naturally with mongodb as opposed to some other dbs.. in this case, if i keep node.js, what db might you recommend?
[12:03:59] <Nodex> redis/memcached
[12:04:02] <mylord> i also just started dev’ing with node.js for this little api, so i can change that also
[12:05:40] <mylord> if i use mongo, i’ll want to shard a lot to support concurrent writes, right? what do i shard on? how much HDD overhead is there etc, generally?
[12:06:25] <Nodex> what do you mean by HDD overhead?
[12:09:00] <mylord> iirc, mongodb usually uses a large amount of disk space per db. is that right? how does that factor in?
[12:09:17] <Nodex> I don't follow what you're asking
[12:11:50] <mylord> i’ll want to shard my db’s to provide more writers, right? when i shard, does that set up another db, with whatever large(?) overhead that implies?
[12:12:08] <mylord> is it common/useful to create multiple VMs just to shard, just to have multiple writers?
[12:12:12] <Nodex> of course, that's implied by adding more infrastructure
[12:12:41] <Nodex> VM's add their own overhead
[12:14:38] <kali> mylord: sharding on the same physical host has a limited interest, but i would not use VMs to do it. just start multiple processes
[12:16:29] <mylord> kali: how limited? for example on this SP-64 machine I use, in the case I outlined roughly above, how many shards on same physical machine might i try? http://www.ovh.com/us/dedicated-servers/enterprise/
[12:17:26] <kali> mylord: one
[12:18:04] <mylord> if i grow some, and have 100k concurrent users, with inserts every 1 minute (for game-end scores), how many such physical machines would i need, roughly? 0.1? 1? how about 1 insert per 1 second (100k inserts/second)?
[12:19:08] <kali> 10k insert/sec is accessible on an average laptop
[12:21:06] <kali> but anyway, you need to bench. my point was... there is usually little to be gained by sharding on the same physical host. it might be the case if you hit specific issues, but they tend to be less and less frequent as mongodb progresses
[12:38:22] <mylord> Nodex: why wouldn’t you use mongodb for the every-second movement/score-update insert in my example?
[12:39:10] <mylord> BTW, all, this is for a games platform where I’ll have scores/behavior updating for many games, and i’m hoping to allow for some growth
[12:39:24] <mylord> elastic search here mentions leaderboards and scores: http://kkovacs.eu/cassandra-vs-mongodb-vs-couchdb-vs-redis
[12:39:48] <mylord> in what ways might it be better for me, as i’ll have leaderboards etc, typical game tourney stuff
[12:40:33] <mylord> i’ll have users paying and receiving real money, so security and data-integrity are pretty important (more in terms of showing a would-be hacker didn’t really earn all that money)
[12:41:32] <mylord> i’ve read some things about mongodb data integrity, and also some debunking.. but if you guys have some general suggestions for me regarding use of mongodb or other suggestions in my scenarios, please tell..
[12:48:29] <Nodex> mylord : I wouldn't use it because I would use something with higher throughput. I do not see the need for the overhead in terms of diskspace / ram usage (for indexes) and ergo would use something else. Again this is ONLY my opinion
[12:51:24] <Nodex> http://www.phoronix.com/scan.php?page=news_item&px=MTY0MTU ... haha, these guys that claim these sorts of things "what that project mean for PostgreSQL - 99.9% percent of projects don't need MongoDB and could be nicely implemented in relational model."..... really don't understand it's not just about modeling data
[12:55:04] <kali> yeah. we just hate their guts.
[12:55:30] <Nodex> in fairness the postgres thing looks promising
[12:56:49] <kali> i still hate it :)
[13:34:13] <lorgio> Hello everyone
[13:35:26] <lorgio> I have copied a database from our remote staging machine, but it is now out of date. I really only want a subsection of a collection to add to my local collection. Is there a way to do this?
[13:36:35] <cheeser> lorgio: http://docs.mongodb.org/manual/reference/method/db.cloneCollection/#db.cloneCollection
[13:39:00] <lorgio> This looks like it would work on a single database... but i'm trying to copy data from a collection on the remote MongoDB to my local instance in the same collection.
[13:39:13] <lorgio> almost like an append from the remote server to my local instance
[13:39:30] <lorgio> oh wait....
[13:40:07] <lorgio> cheeser: that looks like it may work...thanks!
[13:41:39] <cheeser> lorgio: good luck. :D
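A sketch of the helper cheeser links, with a query to pull only the needed subsection (hypothetical host and filter; run from the target database on the local instance):

    db.cloneCollection(
        "staging.example.com:27017",                   // remote source
        "mycollection",                                // collection to copy
        { updatedAt: { $gt: ISODate("2014-03-01") } }  // only the missing slice
    )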
[13:59:54] <Diplomat> hey guys
[14:00:12] <Diplomat> i was wondering.. can you suggest any tools that could help me import my mysql database into mongo?
[14:00:21] <Diplomat> or should i make my own program for that?
[14:01:33] <Nodex> best to write your own, doesn't take long
[14:02:29] <Diplomat> true, but i have 130GB mysql database
[14:02:31] <Diplomat> :/
[14:02:38] <balboah> Diplomat: probably doesn't take more time to make one yourself, as you are the one who knows how your data should be represented in mongo; you would still have to specify the mappings in a "tool". You might have several tables in mysql, for example, which you want to be one object in mongo
[14:02:52] <Nodex> 1 column is the same as 100million columns in terms of structure
[14:02:53] <Diplomat> yeah true
[14:03:07] <Diplomat> i have like mm..
[14:03:13] <Diplomat> ~5 billion rows
[14:03:18] <Diplomat> or.. documents?
[14:03:21] <Nodex> 100 zillion
[14:03:25] <Nodex> doesn't make a difference
[14:03:30] <Diplomat> ok that's good
[14:03:55] <Diplomat> im thinking right now what should be the best way to get those rows from my database or from a sql file
[14:03:56] <Diplomat> lol
[14:04:01] <Diplomat> 130gb is a lot to play with
[14:04:02] <Nodex> as balboah says, only you know your data; you're going from someone else's "best practices" to the reverse in MongoDB, where you build your data around your app
[14:04:45] <Nodex> I would go from a database and record the PK in the document so if it gets interrupted you can start from there again. Then remove the PK if you need to later
[14:08:48] <zymogens> Anyone know what might be going on here. I have a VPS with UFW enabled - It blocks incoming by default. ‘ufw status’ says it is active and only allows connections on 22 from anywhere. It is blocking nginx without a problem. However my mongo shell on my home pc can connect remotely to it (the VPS) no probs - Not sure how that is possible? It seems like the firewall is letting some things through?
[14:09:35] <Nodex> by default mongodb is locked to 127.0.0.1 so have you also opened that?
[14:10:10] <zymogens> oh maybe not.. so it is actually connecting itself to my home mongod instance?
[14:10:41] <Nodex> that would be the more likely answer
[14:11:24] <zymogens> Just had a look and I am not running mongod… also when I reboot my VPS it is no longer able to connect.
[14:11:35] <Diplomat> chkconfig mongod on
[14:12:02] <Nodex> so you must've allowed remote connections to mongo on your VPS?
[14:12:13] <Diplomat> what im thinking is.. i could do like..select x from y limit N,C
[14:12:15] <Diplomat> and save N
[14:12:27] <Diplomat> and just take all that data and push it into mongo
[14:12:44] <zymogens> @Nodex yes I believe so
[14:13:15] <zymogens> @Nodex am just not sure how the shell is getting past the firewall
[14:13:18] <Nodex> Diplomat : do it how you think is best
[14:13:19] <zymogens> the mongo shell
[14:13:30] <Diplomat> yeah i guess so :/
[14:13:38] <Nodex> netstat -pan | grep 27017
[14:14:28] <zymogens> tcp6 0 0 :::27017 :::* LISTEN 403/docker
[14:14:38] <zymogens> Might be that docker is doing things...
[14:17:23] <Nodex> perhaps
[14:18:19] <zymogens> @nodex - am not sure… will have another look into it all. thanks anyway.
[14:19:48] <Diplomat> guys, any suggestions.. one table is http://puu.sh/7IUtz.png
[14:19:54] <Diplomat> which contains ids from another table
[14:19:59] <Diplomat> any ideas how i could format it?
[14:20:25] <kees_> yeah, use mysql
[14:20:54] <Diplomat> well.. running ~5 billion rows on mysql
[14:20:57] <Diplomat> and it will increase a lot
[14:21:04] <Diplomat> not that good idea
[14:21:11] <Nodex> Diplomat : it really depends on how you want to access your data
[14:21:53] <kees_> aye, i don't have 5B+ rows in mysql, but my server with ~1.5B rows runs nice on quite slow hardware
[14:22:06] <Nodex> *yawn*
[14:22:15] <Diplomat> well the process works like.. i enter the name of the item.. and then it will take the id from table A and then looks for matches in table B
[14:22:50] <Diplomat> kees_ : my database will contain 100+ billion rows eventually
[14:22:58] <Diplomat> So I guess we can't compare our situations here
[14:23:18] <Nodex> how often does the name of the item change?
[14:23:23] <Diplomat> never
[14:23:40] <kees_> should store it in one collection then
[14:23:47] <Nodex> then I would store the name in the collection
[14:23:55] <Nodex> even in Relational I would do that
[14:24:23] <Diplomat> well i didnt do it because it would take tons of space
[14:24:35] <Diplomat> since names can be 10 characters or 200+ characters
[14:24:35] <Nodex> no more than another table with X columns
[14:24:39] <kees_> yes it does, but mongo isn't a relational database
[14:24:50] <Diplomat> and one item can have 100+ matches which all have names
[14:24:54] <Diplomat> 10 to 200+
[14:25:37] <Diplomat> that's why i didnt do it; i'd waste tons of space that way, but obviously in mongo i'd have to do that
[14:25:45] <Diplomat> or i could use row's ID
[14:25:47] <kees_> and how many names will there be?
[14:25:47] <Nodex> then my document might look like this... names:['A','B','C','D']
[14:26:01] <Diplomat> kees_ 100 billion +
[14:26:03] <Diplomat> ;)
[14:26:13] <Nodex> ok, too many cooks. good luck, shout if you get stuck
[14:26:24] <kees_> a 100Bx100B table?
[14:27:26] <Diplomat> http://puu.sh/7IUPj.png
[14:27:32] <Diplomat> that's the idea
[14:28:48] <kees_> i would still do it in one collection like { key: item-name-1, matches: [1,2,3,4] }
[14:28:48] <Diplomat> imagine it like a store.. with categories
[14:28:53] <Diplomat> each category has products
[14:28:58] <Diplomat> but each product can have multiple categories
[14:29:25] <Diplomat> yes kees
[14:29:57] <kees_> or: {product, categories: ['a','b','c','d']}
[14:30:00] <Diplomat> that's a brilliant idea because then i can do CATEGORY [ product1, product2, product3 ], CATEGORY2 [product 1, product5, product7]
[14:30:29] <Diplomat> as you can see product1 has 2 categories
[14:30:41] <Diplomat> so what i would have to do is
[14:30:43] <kees_> depends on what is the biggest i'd say, categories can have tons of products, but products usually don't have a lot of categories
[14:30:51] <Diplomat> search for product1 and it gives me category and category2
[14:31:10] <Diplomat> well.. in my store products can have 10000+ categories
[14:31:15] <Diplomat> :D if we think it so
[14:31:58] <Diplomat> that's the problem
[14:33:44] <kees_> anyway, afk, meeting :)
[14:34:46] <Nodex> [14:25:33] <Nodex> then my document might look like this... names:['A','B','C','D']
[14:35:40] <Diplomat> one moment ill show you what im thinking :P
[14:36:02] <Nodex> you just explained what you're thinking
[14:36:08] <Nodex> one product can have X categories
[14:36:22] <Nodex> which is exactly what that is
[14:37:02] <Diplomat> http://pastebin.com/bTNuMgTf
[14:37:13] <Diplomat> as you might see product 1 is in 2 categories
[14:37:21] <Diplomat> but in each category it has different details
[14:38:03] <Nodex> categories:[{"label":"Foo", "description":"Bar baz"}]
[14:38:22] <Nodex> categories:[{"label":"Foo", "description":"Bar baz"},{"label":"Another cat","description":......}]
[14:38:39] <Diplomat> pretty much yes
[14:39:14] <Diplomat> and then i have to search for "product1"
[14:39:17] <Diplomat> and it returns me
[14:39:19] <Diplomat> category name
[14:39:26] <Diplomat> and details for each category
[14:40:25] <Nodex> ok lol
[14:41:03] <Diplomat> http://pastebin.com/12eTp1X5
[14:41:48] <Diplomat> that's how i'd like it to be :D
[14:42:08] <Nodex> I think you're trying to store product data inside the products and not have it in another collection also?
[14:42:44] <Diplomat> well it doesnt have to be yes
[14:43:05] <Diplomat> because product's data will or can be different for each category
[14:43:25] <Nodex> you still seem to be approaching this from a relational POV
[14:44:00] <Nodex> do you need to display the "plucked" data (description or whatever) on a product details page or something?
[14:44:26] <Diplomat> yes
[14:44:39] <Nodex> ok so say as part of an ecommerce system?
[14:44:40] <Diplomat> i need to list categories for that product and display details
[14:44:56] <Diplomat> well this estore idea was just a good example how im going to use this data
[14:45:07] <Nodex> so I goto your app, click on a product and see different categories?
[14:45:13] <Diplomat> yup
[14:45:15] <Nodex> and how are those categories defined?
[14:45:21] <Nodex> (on the page)
[14:45:40] <Diplomat> hm what do you mean ?
[14:46:05] <Diplomat> http://pastebin.com/12eTp1X5 if you look this we can play it like.. i need my data in JSON format like I have shown there
[14:46:39] <Nodex> but you can't query keys in Mongo
[14:46:47] <Nodex> or the values of keys at least
[14:47:42] <Diplomat> but then i could do like
[14:48:23] <Diplomat> product1: { "category1": {data:blabla, data2:blabla}, "category2": {data:blabla, data2:blabla} }
[14:48:43] <Diplomat> but im going to update this product a lot then
[14:48:52] <Diplomat> mostly add new categories to it
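Pulling the thread together: Nodex's point at 14:46 is that dynamic keys like "category1" can't be queried, while an array of subdocuments can. A sketch of that shape (hypothetical field names):

    // one document per product; per-category details live in an array
    db.products.insert({
        name: "product1",
        categories: [
            { label: "category1", data: "blabla", data2: "blabla" },
            { label: "category2", data: "blabla", data2: "blabla" }
        ]
    })

    // search for product1 and get every category plus its details
    db.products.findOne({ name: "product1" })

    // adding a category later is a single $push
    db.products.update(
        { name: "product1" },
        { $push: { categories: { label: "category3", data: "blabla" } } }
    )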
[14:49:58] <mylord> how much faster will mongo run on a typical physical server than on the fastest option VMs, in Fedora, for example? any very rough idea helps.. ie, is it 10-30% slower, or much slower than that?
[14:51:56] <Diplomat> why do you think it's slower in VM ?
[14:52:09] <Nodex> mylord : it's really dependent on your application / work flow
[14:55:18] <Diplomat> But nodex.. then I should just add like..
[14:55:39] <Diplomat> {product, category, feature1, feature2, feature3}
[14:55:39] <Nodex> like or add ?
[14:55:40] <Diplomat> and that's it ?
[14:56:25] <Diplomat> it could be a good idea, but it would eat tons of space :P
[14:57:03] <Nodex> disks are cheap
[14:57:21] <Diplomat> true but it would change my 130GB into 1.3TB
[14:57:45] <Diplomat> and when i add RAID 10
[14:57:49] <Nodex> nobody said it was perfect
[14:57:56] <Diplomat> then it's not very cheap :D
[14:58:12] <Nodex> depends where you get your 1.3tb from
[14:59:01] <Diplomat> oh another idea
[14:59:05] <Diplomat> when im going to write again
[14:59:27] <Diplomat> using the same product and same category (if they are set as indexes), does it write over the row or update the values?
[14:59:58] <Diplomat> like.. do i have to do update query or writing it over would be possible? i think it works like that in cassandra
[15:00:16] <Nodex> there are no joins or related data, so if you update a document's values don't expect it to update every other document's values with the same
[15:01:03] <Diplomat> hm
[15:01:21] <Diplomat> ok
[15:01:38] <Diplomat> i guess i know what im doing then
[15:01:41] <Nodex> that's not to say you can't issue a multi:true query and have it update them
[15:05:49] <Diplomat> ok brilliant now i know what im doing
[15:06:49] <Nodex> I need to clarify. You can update ALL documents that match a query with multi:true; you cannot update one document and expect it to propagate those values around
[15:07:01] <Diplomat> yes i do understand that
[15:07:38] <Diplomat> the thing is
[15:08:39] <Diplomat> hmm
[15:24:38] <saml> can i sort by size of a sub document?
[15:25:07] <saml> db.pages.find().sort({'body': {$size: -1}}).limit(10) 10 largest pages.. for example
[15:25:48] <kali> saml: you can't with a find(), but you can with an aggregation pipeline
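A minimal sketch of the pipeline kali means (assuming MongoDB 2.6+, where $size is available, and that body is an array):

    db.pages.aggregate([
        { $project: { bodySize: { $size: "$body" } } }, // element count
        { $sort:  { bodySize: -1 } },                   // largest first
        { $limit: 10 }
    ])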
[15:37:35] <mylord> how can I make a performant auto incrementing _id for all items?
[15:38:02] <mylord> performant in terms of lookup. i suppose i’d want an int for that
[15:38:22] <mylord> but is there a facility to auto add that on insert, etc?
[15:38:32] <Nodex> you would have to findAndModify with an $inc
[15:38:44] <Nodex> else use the built in ObjectId that adds it on insert
[15:38:58] <kali> +1. autoincrement is a bad idea. use ObjectId :)
[15:39:04] <mylord> that ObjectId is that long _id string? is that slow to lookup tho?
[15:39:25] <kali> it's not a string, it's a 96-bit (12-byte) number
[15:39:31] <mylord> k
[15:39:48] <mylord> and its pretty performant in lookup?
[15:39:50] <kali> yes
[15:39:54] <mylord> k, great
[15:40:14] <Nodex> it comes with a free index for last/first inserted documents
[15:41:25] <kali> yeah, and a rough creation timestamp
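For completeness, a sketch of the findAndModify counter pattern Nodex mentioned (hypothetical counters collection), though ObjectId remains the recommended route:

    // atomically fetch the next integer id
    function getNextId(name) {
        return db.counters.findAndModify({
            query:  { _id: name },
            update: { $inc: { seq: 1 } },
            new:    true,   // return the post-increment document
            upsert: true    // create the counter on first use
        }).seq;
    }

    db.items.insert({ _id: getNextId("items"), score: 42 });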
[15:46:23] <Ramjet> Q: Are there any "Sets of databases used for functional testing.", i.e., *public* MongoDB databases of varying structure/complexity used for functional/regression testing of MongoDB itself and the MongoDB command line utilities. I have a cmd line utility I need to functional test...
[15:46:48] <Nodex> there used to be the enron emails somewhere
[15:50:14] <Ramjet> enron?
[15:51:53] <palominoz> hello there. i implemented localization through documents like { locales: { en: { name: "car" }, it: { name: "automobile" } } }. i need to full-text search those localized strings. i had a look at the mongodb docs but i didn't understand whether i am able to index the dynamically added / removed locale objects that represent the available locales of the parent document
[16:01:01] <tkeith> Is findOne({_id: SOME_ID, otherField: SOME_VALUE}) as fast as findOne({_id: SOME_ID}) ?
[16:01:56] <Diplomat> good luck to me
[16:02:07] <Diplomat> it took 53 seconds to order 5 rows :| from mysql
[16:02:15] <Diplomat> using 2 joins just to get all that data
[16:03:04] <Diplomat> nvm, i went full retard.. i used join instead of inner join
[16:05:57] <Ramjet> <palominoz> I think you can index any field, sparse or not. You will need different indexes for each of [ 'locales.en', 'locales.it', 'locales.XY', ... ]
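A sketch of Ramjet's per-locale indexes (hypothetical collection name):

    db.docs.ensureIndex({ "locales.en.name": 1 })
    db.docs.ensureIndex({ "locales.it.name": 1 })
    // one more index is needed for every locale handled this way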
[16:10:19] <Ramjet> Q: Are there any "Sets of databases used for functional testing.", i.e., *public* MongoDB databases of varying structure/complexity used for functional/regression testing of MongoDB itself and the MongoDB command line utilities. I have a cmd line utility I need to functional test, preferably by downloading file(s) from mongodump.
[16:13:11] <cheddar> Is there a bulk insert operator that will update objects that duplicate an ID in the DB instead of throwing an error?
[16:24:16] <Ramjet> cheddar will the multi keyword work with an update() doing an upsert? See http://docs.mongodb.org/manual/reference/method/db.collection.update/#multi-parameter
[16:28:34] <palominoz> @Ramjet ok, very clear, thank you.
[16:52:57] <cheddar> Ramjet, I don't think so. It looks like that is telling mongo that it is ok to change multiple documents. I have batches of documents that I am inserting (say 1k at a time). I am specifying an _id and if I have _id collisions, I want mongo to just update the document instead of error
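One way to get that behavior (a sketch; plain per-document upserts rather than a true bulk insert, so it trades away some speed):

    // `batch` is a hypothetical array of ~1k docs, each carrying its own _id
    batch.forEach(function (doc) {
        // replaces the existing document on _id collision, inserts otherwise
        db.items.update({ _id: doc._id }, doc, { upsert: true });
    });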
[17:18:07] <Alexei_987> Hello! What do you think about the theoretical possibility of adding python as a server side language for doing map reduce?
[17:23:10] <cheeser> i think you should use hadoop streaming instead.
[17:34:25] <Alexei_987> cheeser: thanks that could be an option. However I'm worried that serializing/deserializing data would create a big overhead for something more complex than wordcount.
[17:36:21] <cheeser> Alexei_987: i agree but if you're set on using python...
[17:40:22] <arunbabu> Is it possible to get part of a document in a query?
[17:40:51] <arunbabu> I have a collection with objects which have two attributes: className, students[]
[17:41:08] <arunbabu> I want to get students[] of all objects. Is that possible?
[17:44:55] <alnewkirk> I'm experiencing some weirdness while trying to setup replica-sets
[17:45:13] <alnewkirk> ... rather I should say, I'm automating the configuration of replica-sets
[17:45:26] <alnewkirk> I get the following error on all machines: { "errmsg" : "couldn't initiate : can't find self in the replset config", "ok" : 0 }
[17:46:53] <cheeser> arunbabu: http://docs.mongodb.org/manual/reference/method/db.collection.find/ look at the projection document
[17:48:03] <arunbabu> cheeser: thanks!
[17:49:23] <cheeser> np
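A minimal sketch of the projection cheeser points at (assuming arunbabu's field names):

    // return only the students array of every document
    db.classes.find({}, { students: 1, _id: 0 })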
[19:02:40] <quantalrabbit> I want to replace the contents of a field in a collection using regex backreferences. Is this possible with db.collection.update? say i want cat1, cat2, cat3 to become dog1, dog2, dog3. does something like db.animals.update({'name': s/(cat)(\d)/dog\2/ }) work?
[19:04:59] <rkgarcia> quantalrabbit: i understand regex works as a query operator, so you'd need to test it
[19:05:07] <rkgarcia> theoretically it works
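For what it's worth, update() can't build a new value from a regex capture; the usual workaround is iterating on the client (a sketch, reusing the animals example):

    db.animals.find({ name: /^cat\d/ }).forEach(function (doc) {
        db.animals.update(
            { _id: doc._id },
            { $set: { name: doc.name.replace(/^cat(\d)$/, "dog$1") } }
        );
    });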
[19:14:48] <LoneSoldier728> anyone know why when I call populate I get another user's collection? (mongoose)
[19:17:29] <LoneSoldier728> why is this channel always dead
[19:17:53] <LouisT> that's just irc
[19:17:57] <LouisT> people have lives, you know..
[19:18:42] <LoneSoldier728> ya i know where are the ones that live in irc heh
[19:19:10] <ranman> LoneSoldier728 ...
[19:19:16] <LoneSoldier728> yes?
[19:19:42] <ranman> I was contributing to the deadness of the channel
[19:19:54] <LoneSoldier728> oh heh
[19:19:55] <LoneSoldier728> http://stackoverflow.com/questions/22610776/populate-issue-with-mongodb
[19:19:56] <ranman> LoneSoldier728: also to answer your question I think we need more info
[19:20:10] <cheeser> the spirits suggest your problem is due to invalid mongo URIs.
[19:20:21] <LoneSoldier728> like everything is good except that if I create user b
[19:20:29] <LoneSoldier728> then I got user's a stories array
[19:21:22] <LoneSoldier728> what do you mean invalid URIs?
[19:21:57] <cheeser> just a stab in the dark
[19:22:15] <cheeser> chicken bones are notoriously inscrutable.
[19:22:36] <qutwala> hey, everybody
[19:23:05] <LoneSoldier728> hey
[19:23:31] <BadHorsie> Is it me, or does Perl's MongoDB driver not handle update with $each and $slice?
[19:25:08] <cheeser> "i don't know of any reason why it wouldn't work" says the perl driver dev.
[19:25:22] <ranman> BadHorsie: do you have a gist ?
[19:32:52] <qutwala> i have a few questions. actually i'm new to mongo but already have a few tasks with it; the first is done, but the second remains. In the first task i crawl raw data and push it to a raw collection. Later i should go over all docs in that raw collection, process each one (it can be just a new item, but sometimes i have to increment some keys, for example `count` = 2 if the item already exists), and push it to another collection
[19:32:52] <qutwala> which stores clean data, and also delete the already-processed data. I think the hardest part will be the updating and i still have no idea how to do it; maybe there's a similar situation described somewhere? Sorry for the "article". :)
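The increment-or-insert step described there maps onto an upsert (a sketch, with hypothetical collection and field names):

    // for each raw doc: bump count if the clean item exists, create it otherwise
    db.clean.update(
        { key: rawDoc.key },
        { $inc: { count: 1 }, $set: { data: rawDoc.data } },
        { upsert: true }
    );
    db.raw.remove({ _id: rawDoc._id });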
[19:35:39] <ranman> LoneSoldier728: can you create a more reproducible scenario?
[19:35:57] <LoneSoldier728> hm how so
[19:36:15] <ranman> a minimum test case would include three documents you've inserted I think
[19:36:21] <ranman> 4 I guess
[19:36:25] <ranman> 2 users and 2 stories
[19:36:28] <LoneSoldier728> kk
[19:36:31] <ranman> 1 story that belongs to one user
[19:36:35] <ranman> and 1 story that belongs to another
[19:42:19] <LoneSoldier728> yeah
[19:42:28] <LoneSoldier728> im grabbing em about to have it rdy
[19:43:26] <ranman> LoneSoldier728: take your time
[19:49:10] <LoneSoldier728> http://pastebin.com/wUj9usEC
[19:49:13] <LoneSoldier728> ranman there
[19:49:30] <LoneSoldier728> two users, two stories, the populate route code and the schemas
[19:52:05] <LoneSoldier728> the issue is, when I call the route I get A, but when I switch to the other user the call still shows A. not sure how; the id changes to the correct user for the query, so it makes no sense unless the populate is done incorrectly... and if I create user c, everything comes back empty, as if c takes the new place
[20:09:20] <korishev> Hello all, I’m looking into tag sets with the ruby driver, but even though I set read_preference to :nearest, I get errors about ‘cannot combine tag_sets with PRIMARY’
[20:09:55] <korishev> it looks like the ruby driver is overwriting the read preference when I issue a command (not a query). Anybody seen this before?
[20:20:32] <tscanausa> is there a way to add a removed node of 1 replica set to another replica set?
[20:22:33] <LoneSoldier728> nvm was missing _id btw
[20:23:01] <ranman> LoneSoldier728: ah nice, glad you figured it out
[20:23:07] <LoneSoldier728> yea thank
[20:34:45] <BadHorsie> ranman: http://pastebin.com/nTg2sMUn
[20:39:13] <ranman> BadHorsie: what version of MongoDB are you on?
[20:53:30] <BadHorsie> ranman: 2.4.9 according to version() (unless that is the shell version)
[20:56:12] <BadHorsie> ranman: The library is $MongoDB::VERSION = '0.701.4'; It's very interesting that the '$each' is taken literally, if I comment the '$sort' and '$slice' lines it inserts the subdoc correctly and the '$each' is interpreted correctly
[20:56:31] <cheeser> from the shell: db.version()
[20:57:00] <BadHorsie> cheeser: Thanks, same result
[20:57:01] <cheeser> the shell and db versions are almost certainly the same unless you're doing snapshot testing
[20:57:17] <cheeser> i use a 2.4.9 shell against 2.5.x all the time and then they don't match
[20:57:47] <BadHorsie> Interesting
[20:58:07] <ranman> BadHorsie: you can get server version using: db.serverStatus()
[20:58:40] <kali> db.version() is db.serverStatus().version()
[20:59:14] <kali> (minus the last pair of parentheses)
[20:59:18] <ranman> :)
[21:06:22] <ranman> BadHorsie: I'm on MongoDB-0.702.2 and I can't repro
[21:07:28] <BadHorsie> ranman: Thanks for taking your time, did you copy the exact script I had or did you modify anything?
[21:07:50] <ranman> BadHorsie: I modified the indentation, that was all, do you have an example initial document?
[21:09:14] <ranman> woah
[21:09:16] <ranman> just managed to repro
[21:10:54] <BadHorsie> I did this: db.serverStats.insert({serverName:"this",logs:[]});and then ran the script repeatedly... I think I will end running this on a cron or something: db.serverStats.update({serverName:'this'},{$push:{logs:{$sort:{lastPoll:1},$slice:-3}}}); ... I do not see any references to $slice or $sort on the .pm.... Only see refs for 'multiple' and 'safe'
[21:17:14] <ranman> well those are just operators so they're passed to the server
[21:17:23] <ranman> I can repro the behavior in the shell as well
[21:20:13] <ranman> {'serverName': 'this'},
[21:20:13] <ranman> {
[21:20:13] <ranman> '$push': {
[21:20:14] <ranman> 'logs': {
[21:20:15] <ranman> '$each': [{'logName': '/var/log/testlog', 'lastPoll': new Date()}],
[21:20:16] <ranman> '$sort': {'logs.lastPoll': 1},
[21:20:17] <ranman> '$slice': -2
[21:20:18] <ranman> }
[21:20:19] <ranman> }
[21:20:20] <ranman> }
[21:20:21] <ranman> )
[21:21:52] <ranman> sorry guys that was supposed to be a PM
[21:25:22] <ranman> BadHorsie: I think we've tracked this down
[21:25:25] <ranman> give me a few
[21:28:32] <ranman> heyo BadHorsie I've figured it out
[21:29:17] <ranman> perl doesn't order its hashes
[21:29:22] <ranman> order matters for this command
[21:33:17] <ranman> BadHorsie: I would fix like this http://pastebin.com/p2B8kWx5
[21:45:55] <dexteruk> Hi, im new to mongodb and im looking to move over from mysql. What i don't know is what the best option is. If you have a document with a variable number of elements, is it best to link via ID or embed the elements?
[21:56:54] <BadHorsie> ranman: you rock, thank you so much
[21:57:04] <ranman> NP, comments like that make my day
[21:57:32] <ranman> if you feel super generous you can be all like "@jrhunt from @mongodb is a giant nerd who helps people on IRC"
[21:59:12] <BadHorsie> ranman: haha done :P
[22:21:32] <LoneSoldier728> can i use pull to pull an array of ids from an array of ids
[22:22:02] <LoneSoldier728> meaning {$pull: {arrayField: arrayOfIds}}
[22:23:02] <BadCodSmell> I'm doing this: update({id: 1}, {id:"1", value:123}, {upsert: true}) where id is a unique index. The type mismatch is an accident and I'm getting an error about duplicate keys...
[22:23:17] <BadCodSmell> Is this normal?
[22:25:04] <BadCodSmell> dup key: { : "5000" } this is the weird thing and I swear empty keys is off
[22:25:56] <BadCodSmell> It seems ok without the type mismatch
[22:28:54] <BadCodSmell> My guess is that here mongo just isn't so great at producing sane/relevant error messages
[22:35:31] <bob_123> hello, have a short question that I haven't had much luck with google finding the answer: with the aggregate function, is there a way for me to calculate the average of a list of values, but only taking into consideration the non-zero values?
[22:36:16] <bob_123> so something like: { $group: { averageValue: { $avg: "$value.count" } } } if not zero
[22:36:32] <joannac> BadCodSmell: not clear to me what you're actually doing. More information please?
[22:37:03] <joannac> bob_123: why don't you filter out the zero ones in your query clause?
[22:39:29] <ranman> LoneSoldier728: I think you might be looking for $pullAll
[22:39:36] <bob_123> @joannac, I also want to sum up another value in the same documents; I don't want to filter out zeros of one field because the other field might be non-zero for that document
[22:39:38] <LoneSoldier728> yeah just realized that
[22:39:40] <LoneSoldier728> thanks
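A minimal sketch of $pullAll (hypothetical ids):

    // removes every listed id from arrayField in one update
    db.coll.update(
        { _id: docId },
        { $pullAll: { arrayField: [id1, id2, id3] } }
    )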
[22:39:54] <bob_123> I tried to phrase that to make sense, sorry if it's a little confusing
[22:40:46] <bob_123> in the aggregate function, I want the sum of one value, average of another value, but average only for non-zero values for that field
[22:41:13] <bob_123> wondering if this is possible doing one aggregate call
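A sketch of doing both in one $group: $avg ignores nulls, so $cond can map the zeros to null (hypothetical field names):

    db.coll.aggregate([
        { $group: {
            _id: null,
            total: { $sum: "$other.count" },   // sums across every document
            averageValue: { $avg: {            // nulls below are skipped
                $cond: [ { $ne: [ "$value.count", 0 ] }, "$value.count", null ]
            } }
        } }
    ])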
[23:16:07] <lorgio> Are map-reduce methods supposed to work with partial data?
[23:16:48] <lorgio> I'm calculating my data by hand, and have noticed that they work out fine, but in the map-reduce i'm not getting the same output
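A mismatch like that is often the re-reduce pitfall: reduce may be called repeatedly on partial results, so it must return the same shape it accepts (a sketch, assuming an average-style computation):

    // carry sum and count through reduce; derive the average in finalize
    var reduceFn = function (key, values) {
        var out = { sum: 0, count: 0 };
        values.forEach(function (v) {
            out.sum   += v.sum;
            out.count += v.count;
        });
        return out;   // same shape as each element of `values`
    };
    var finalizeFn = function (key, v) { return v.sum / v.count; };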
[23:27:57] <LoneSoldier728> so addToSet adds it without allowing duplicates
[23:28:01] <LoneSoldier728> what about pushAll
[23:28:32] <LoneSoldier728> or if I wanted to do a pushAll without allowing duplicates, do I do something else if that does not do it... because I tried addToSet with an array and I get an array within an array
[23:28:53] <LoneSoldier728> ah nvm, says I should use $each
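The $each modifier LoneSoldier728 landed on (a minimal sketch; without it, $addToSet treats the whole array as a single element):

    // adds each id individually, skipping any already present
    db.coll.update(
        { _id: docId },
        { $addToSet: { arrayField: { $each: [id1, id2, id3] } } }
    )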